32 research outputs found

    Pericardial tamponade revealing systemic lupus erythematosus during atypical visceral leishmaniasis

    The association of visceral leishmaniasis (VL) and systemic lupus erythematosus (SLE) is rare and serious. The similarity of their clinical signs can cause diagnostic difficulties, especially when the two diseases present concomitantly. We report the case of a patient who presented a severe and atypical association of VL and SLE that declared themselves at the same time. The VL was atypical in the absence of a tumoral syndrome and of fever, and in the negativity of leishmaniasis serology. The previously unrecognized SLE was revealed by a severe flare with several visceral involvements. The outcome was favorable under amphotericin B and corticosteroid therapy. When the two diseases are associated, particular surveillance is necessary in order to diagnose severe visceral involvement of SLE in time and to start etiological and symptomatic management early, which is the only guarantee of a favorable outcome. Key words: visceral leishmaniasis, systemic lupus erythematosus, association, corticosteroid therapy

    Cross-layer system reliability assessment framework for hardware faults

    System reliability estimation during early design phases facilitates informed decisions for the integration of effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports can be excessively pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate but fast estimation of computing system reliability. The backbone of the methodology is a component-based Bayesian model, which effectively calculates system reliability based on the masking probabilities of individual hardware and software components, considering their complex interactions. Our detailed experimental evaluation for different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimations (FIT rates) compared to statistically significant but slow fault-injection campaigns at the microarchitecture level.
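
    The core idea of a component-based estimate can be illustrated with a small numeric sketch. The following Python fragment is a toy illustration under strong assumptions (known, independent per-component masking probabilities; invented component names and rates), not the paper's actual Bayesian model:

        # Minimal sketch of a component-based reliability estimate, assuming
        # per-component masking probabilities are known and independent.
        # Component names and numbers are illustrative, not from the paper.

        # Raw FIT rate (failures per 10^9 device-hours) of each component.
        raw_fit = {"register_file": 120.0, "l1_cache": 300.0, "alu": 40.0}

        # Probability that a raw fault in the component is masked before it
        # becomes a user-visible failure (circuit, microarchitecture, and
        # software masking combined).
        masking = {"register_file": 0.92, "l1_cache": 0.97, "alu": 0.85}

        def system_fit(raw_fit, masking):
            """Effective system FIT: sum of each component's unmasked rate."""
            return sum(rate * (1.0 - masking[c]) for c, rate in raw_fit.items())

        print(f"Estimated system FIT: {system_fit(raw_fit, masking):.1f}")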

    SyRA: early system reliability analysis for cross-layer soft errors resilience in memory arrays of microprocessor systems

    Cross-layer reliability is becoming the preferred solution when reliability is a concern in the design of a microprocessor-based system. Nevertheless, deciding how to distribute error management across the different layers of the system is a very complex task that requires the support of dedicated frameworks for cross-layer reliability analysis. This paper proposes SyRA, a system-level cross-layer early reliability analysis framework for radiation-induced soft errors in memory arrays of microprocessor-based systems. The framework exploits a multi-level hybrid Bayesian model to describe the target system and takes advantage of Bayesian inference to estimate different reliability metrics. SyRA implements several mechanisms and features to deal with the complexity of realistic models and provides a complete tool-chain that scales efficiently with the complexity of the system. The simulation time is significantly lower than that of microarchitecture-level or RTL fault-injection experiments, with an accuracy high enough to take effective design decisions. To demonstrate the capability of SyRA, we analyzed the reliability of a set of microprocessor-based systems characterized by different microprocessor architectures (i.e., Intel x86, ARM Cortex-A15, ARM Cortex-A9) running either the Linux operating system or bare metal. Each system under analysis executes different software workloads, both from benchmark suites and from real applications.
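
    To give an intuition of layer-by-layer Bayesian reasoning about soft errors, here is a deliberately simplified Python sketch. The chain structure, the probabilities, and the competing failure source are all assumptions for illustration; SyRA's actual hybrid model is far richer:

        # Illustrative layer-by-layer propagation, assuming a simple chain:
        # a raw bit flip in a memory array is either masked at each layer
        # or propagates upward. All probabilities are invented.

        p_flip = 1e-9            # P(raw soft error per bit per cycle), assumed
        p_arch_propagates = 0.3  # P(error reaches architectural state | flip)
        p_sw_propagates = 0.2    # P(software failure | architectural error)

        # Chain rule: P(failure) = P(flip) * P(arch | flip) * P(fail | arch)
        p_failure = p_flip * p_arch_propagates * p_sw_propagates
        print(f"P(software failure per bit per cycle) = {p_failure:.2e}")

        # Bayesian inference in the other direction: given an observed
        # failure, posterior that this memory array was the origin.
        p_other_source = 2e-11   # assumed competing failure rate
        posterior = p_failure / (p_failure + p_other_source)
        print(f"P(memory array was the origin | failure) = {posterior:.2f}")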

    Analyzing and supporting the decision-making process for computing system reliability with a reliability evaluation framework

    Reliability has become an important design aspect for computing systems due to aggressive technology miniaturization and the demand for uninterrupted performance, which introduce a large set of failure sources for hardware components. The hardware can be affected by faults caused by physical manufacturing defects or by environmental perturbations such as electromagnetic interference, external radiation, or high-energy neutrons from cosmic rays and alpha particles. For embedded systems and systems used in safety-critical fields such as avionics, aerospace, and transportation, these faults can damage components and lead to catastrophic failures. Investigating new methods to evaluate system reliability helps designers understand the effect of faults on the system, and thus to develop reliable and dependable products. Depending on the design phase of the system, the development of reliability evaluation methods can save design costs and effort and positively impact product time-to-market.

    The main objective of this thesis is to develop new techniques to evaluate the reliability of complex computing systems running software. The evaluation targets faults leading to soft errors. These faults can propagate through the different layers composing the full system and can be masked during this propagation at either the technological or the architectural level. When a fault reaches the software layer of the system, it can corrupt data, instructions, or the control flow. Such errors may impact correct software execution by producing erroneous results or by preventing the application from executing, leading to abnormal termination or an application hang.

    In this thesis, the reliability of the different software components is analyzed at different levels of the system (depending on the design phase), emphasizing the role that the interaction between hardware and software plays in the overall system. The reliability of the system is then evaluated via a flexible, fast, and accurate evaluation framework. Finally, the reliability decision-making process in computing systems is comprehensively supported by the developed framework (methodologies and tools).

    A survey on simulation-based fault injection tools for complex systems

    Dependability is a key decision factor in today's global business environment. A powerful method for evaluating the dependability of a system is fault injection. The principle of this approach is to insert faults into the system and to monitor its responses in order to observe its behavior in the presence of faults. Several fault injection techniques and tools have been developed and experimentally tested. They can be grouped into three main categories: hardware fault injection, simulation-based fault injection, and emulation-based fault injection. This paper presents a survey of simulation-based fault injection techniques, with a focus on complex microprocessor-based systems.
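
    The inject-and-observe loop at the heart of simulation-based fault injection can be sketched in a few lines. The following Python toy flips one bit in a program input and classifies the outcome; the workload, the fault model, and the outcome classes are illustrative assumptions, not taken from any surveyed tool:

        # Minimal simulation-based fault injector: the "system" is a pure
        # Python function and a fault is a single bit flip in one input word.
        import random

        def workload(x):
            # Toy program: only the low byte of the input affects the output,
            # so flips in the upper 24 bits are masked (benign outcome).
            return (x & 0xFF) * 2

        def run_campaign(golden_input, n_runs=1000, seed=0):
            rng = random.Random(seed)
            golden = workload(golden_input)
            outcomes = {"benign": 0, "sdc": 0, "crash": 0}
            for _ in range(n_runs):
                faulty = golden_input ^ (1 << rng.randrange(32))  # one bit flip
                try:
                    result = workload(faulty)
                    outcomes["sdc" if result != golden else "benign"] += 1
                except Exception:
                    outcomes["crash"] += 1  # abnormal termination
            return outcomes

        print(run_campaign(12345))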

    Efficient 8-bit matrix multiplication on Computational SRAM architecture

    This paper proposes a micro-kernel that efficiently computes 4x4 8-bit matrix multiplication on In-Memory Computing (IMC) architectures with 128-bit word-lines. The proposed implementation requires only simple instructions with vector-data computation and can be used as a basic block to implement General Matrix Multiplication (GEMM) on 128-bit word-line IMC architectures, using 4x4 matrix partitioning. This micro-kernel would benefit domains such as image processing and computer graphics.
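
    As a rough intuition of how a 4x4 8-bit product maps onto a 128-bit word-line (sixteen 8-bit lanes), the following Python sketch emulates the vector-data steps in software. The lane layout, the broadcast/repeat primitives, and the widened accumulator are assumptions made for illustration and do not reproduce the actual IMC instruction set:

        LANES = 16  # one 128-bit word-line = sixteen 8-bit lanes; one 4x4 matrix

        def a_column_broadcast(a, k):
            # Vector whose lane (i*4 + j) holds A[i][k]: column k of A, each
            # element broadcast across the 4 lanes of its output row.
            return [a[lane // 4][k] for lane in range(LANES)]

        def b_row_repeat(b, k):
            # Vector whose lane (i*4 + j) holds B[k][j]: row k of B, 4 times.
            return [b[k][lane % 4] for lane in range(LANES)]

        def matmul4x4(a, b):
            acc = [0] * LANES                  # accumulator word-line (widened)
            for k in range(4):                 # 4 vector multiply-accumulates
                ak, bk = a_column_broadcast(a, k), b_row_repeat(b, k)
                acc = [(s + x * y) & 0xFFFF for s, x, y in zip(acc, ak, bk)]
            return [acc[i * 4:(i + 1) * 4] for i in range(4)]  # unpack to 4x4

        A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
        I = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
        print(matmul4x4(A, I))  # identity on the right: result equals A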

    Toward Modeling Cache-Miss Ratio for Dense-Data-Access-Based Optimization

    Adapting source code to the specifics of its host hardware is one way to implement software optimization, making it possible to benefit from processors that are primarily designed to improve system performance. To reach such a software/hardware fit without narrowing the scope of the optimization to a few executions, one needs relevant performance models of the considered hardware. This paper proposes a new method to optimize software kernels by considering their data-access mode. The proposed method builds a data-cache-miss model of a given application with respect to its specific memory-access pattern. We apply the method to evaluate several custom implementations of matrix data layouts. To validate the functional correctness of the generated models, we propose a reference algorithm that simulates a kernel's exploration of its data. Experimental results show that the proposed data alignment reduces the number of cache misses by up to 50% and decreases the execution time by up to 30%. Finally, we show the necessity of integrating the impact of the Translation Lookaside Buffer (TLB) and the memory prefetcher within our performance models.
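
    A reference algorithm of the kind used for validation can be sketched as a toy address-stream simulator. In the following Python fragment the cache geometry (direct-mapped, 64-byte lines, 32 KiB) and the row-major matrix layout are assumptions chosen to make the layout effect visible:

        LINE = 64          # cache line size in bytes
        SETS = 512         # direct-mapped sets (32 KiB cache), assumed geometry

        def count_misses(addresses):
            # Replay a byte-address stream through a direct-mapped cache model.
            tags = [None] * SETS
            misses = 0
            for addr in addresses:
                line = addr // LINE
                s, tag = line % SETS, line // SETS
                if tags[s] != tag:
                    tags[s] = tag
                    misses += 1
            return misses

        # Column-major traversal of a row-major 1024x1024 byte matrix touches
        # a new line almost every access; row-major traversal reuses each
        # cached line 64 times before moving on.
        N = 1024
        row_major = [i * N + j for i in range(N) for j in range(N)]
        col_major = [i * N + j for j in range(N) for i in range(N)]
        print("row-major misses:", count_misses(row_major))
        print("col-major misses:", count_misses(col_major))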

    Software testing and software fault injection

    Reliability is one of the most important characteristics of system quality. It is defined as the probability of failure-free operation of a system for a specified period of time in a specified environment. For microprocessor-based systems, reliability includes both software and hardware reliability. Many methods and techniques have been proposed in the literature to evaluate and test both software faults (e.g., mutation testing, control flow testing, data flow testing) and hardware faults (e.g., fault injection). In this paper, we present a survey of techniques and methods proposed to evaluate software and hardware reliability, and we study the possibility of exploiting them to assess the role of the software stack in system reliability in the face of hardware faults.
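
    To make the software-fault side concrete, here is a minimal mutation-testing sketch in Python: one hand-written mutant and a check of whether the test suite kills it. The function, the mutation, and the test cases are invented for illustration, not drawn from the surveyed literature:

        def clamp(x, lo, hi):
            # Original implementation: constrain x to the interval [lo, hi].
            return max(lo, min(x, hi))

        def clamp_mutant(x, lo, hi):
            # Operator mutation: min replaced by max, as a mutation tool might do.
            return max(lo, max(x, hi))

        def suite_passes(f):
            # True if every test passes; a mutant is "killed" when one fails.
            cases = [((5, 0, 10), 5), ((-3, 0, 10), 0), ((42, 0, 10), 10)]
            return all(f(*args) == expected for args, expected in cases)

        print("original passes:", suite_passes(clamp))           # True
        print("mutant killed:", not suite_passes(clamp_mutant))  # True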

    Instruction Set Design Methodology for In-Memory Computing through QEMU-based System Emulator

    In-Memory Computing (IMC) is a promising paradigm to mitigate the von Neumann bottleneck. However, its evaluation on complete applications in the context of full-scale systems is limited by the complexity of simulation frameworks as well as by the disjunction between hardware exploration and compiler support. This paper proposes a global exploration flow at the level of Instruction Set Architectures (ISAs) that combines modeling with the generation of compiler support in order to perform ISA-level exploration. Our emulation methodology is based on QEMU, implements a performance model based on hardware characterizations from the state of the art, and allows the modeling of cache hierarchies, while our compiler support is automatically generated and based on a specialized compiler. We evaluate three applications in the domains of image processing and linear algebra on a reference IMC architecture, and analyze the obtained results to validate our methodology.
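
    The kind of trace-based performance model the abstract refers to can be sketched as a per-instruction-class cost table. In the following Python fragment the instruction classes, latencies, and hard-coded trace are all hypothetical stand-ins for values that an emulator plugin and hardware characterization would supply:

        # Hypothetical trace-based cycle estimate: each emulated instruction
        # class gets a latency from (invented) hardware characterizations.
        latency_cycles = {
            "cpu_alu": 1,       # ordinary host-ISA instruction
            "imc_vadd": 4,      # in-memory vector add on a 128-bit word-line
            "imc_vmul": 8,      # in-memory vector multiply
            "mem_load": 100,    # DRAM access on a cache miss
        }

        def estimate_cycles(trace):
            # Reduce an instruction trace (list of class names) to cycles.
            return sum(latency_cycles[op] for op in trace)

        # A trace as an emulator might record it, here hard-coded for the demo.
        trace = ["mem_load"] + ["imc_vmul", "imc_vadd"] * 16 + ["cpu_alu"] * 8
        print("estimated cycles:", estimate_cycles(trace))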